migrate AWS Redshift services to AWS SDK v2 #50600
base: gavinfrazar/update-awsconfig
Conversation
for pageNum := 0; pageNum <= maxAWSPages && pager.HasMorePages(); pageNum++ {
	page, err := pager.NextPage(ctx)
	if err != nil {
		return nil, libcloudaws.ConvertRequestFailureErrorV2(err)
Are we able to inject the error conversion into the stack, say in awsconfig, like previously discussed, or have we opted to convert per API call?
For the migration effort I think we should stick to converting per API call, to stay close to the original code. In later PRs we can remove the conversion helpers and update all callers to use the middleware.
awsconfig.WithCredentialsMaybeIntegration(cfg.Integration),
awsconfig.WithIntegrationCredentialProvider(cfg.IntegrationCredentialProviderFn),
nit: wonder if we could merge these two lines, or move the IntegrationCredentialProviderFn to the config provider creation.
Hmm, good point, but I think in a separate PR I'm just going to eliminate awsconfig.GetConfig altogether so that everything goes through awsconfig.Cache instead.
With the free function removed, we can instead provide the integration credentials provider to the cache when we create it, like we did for v1 originally:
if c.CloudClients == nil {
awsIntegrationSessionProvider := func(ctx context.Context, region, integration string) (*session.Session, error) {
return awsoidc.NewSessionV1(ctx, c.AccessPoint, region, integration)
}
cloudClients, err := cloud.NewClients(
cloud.WithAWSIntegrationSessionProvider(awsIntegrationSessionProvider),
)
if err != nil {
return trace.Wrap(err, "unable to create cloud clients")
}
c.CloudClients = cloudClients
}
I think we moved this option to a per-call basis because when the awsconfig package was created it provided a free func GetConfig instead of a cache.
Not a big change (I already eliminated all non-test calls to awsconfig.GetConfig), but it has nothing to do with migrating Redshift.
lib/cloud/mocks/aws_sts.go (Outdated)
// Every fake client will retrieve its credentials if it has them, and then
// delegate the AssumeRole call to the root faked client.
// In this way, each role in a chain of roles will be assumed and recorded
// by the root fake STS client.
if m.credentialProvider != nil {
Curious how role chaining works with the STS mock.
The uncached awsconfig.GetConfig only uses a credential cache on the last role, and then retrieves the chained credentials:
func getConfigForRoleChain(ctx context.Context, cfg aws.Config, roles []AssumeRole, newCltFn AssumeRoleClientProviderFunc) (aws.Config, error) {
for _, r := range roles {
cfg.Credentials = getAssumeRoleProvider(ctx, newCltFn(cfg), r)
}
if len(roles) > 0 {
// no point caching every assumed role in the chain, we can just cache
// the last one.
cfg.Credentials = aws.NewCredentialsCache(cfg.Credentials, awsCredentialsCacheOptions)
if _, err := cfg.Credentials.Retrieve(ctx); err != nil {
return aws.Config{}, trace.Wrap(err)
}
}
return cfg, nil
}
In real code, calling cfg.Credentials.Retrieve will recursively retrieve credentials for each role in the chain, e.g. if the chain is IMDS -> role1 -> role2, then the recursive calls are (role2).Retrieve -> (role1).Retrieve -> (IMDS).Retrieve.
To mirror this behavior, the mock client calls Retrieve if it has a credentials provider (it's only given a cred provider in the chained-role case) and then calls AssumeRole so that all roles are recorded by the root client.
TBH we should probably not rely on recording API calls like this, but several tests rely on it.
I've pushed a change that hopefully makes it clearer what the mock is doing, and that it does it to centralize role recording: diff
This PR migrates all uses of the AWS Redshift API to AWS SDK v2.
The PR depends on, and is stacked on top of, another PR for credential caching.